Missing data is a common concern in health datasets, and its impact on good decision-making processes is well documented. Our study contributes a methodology for tackling missing data problems by combining synthetic dataset generation, missing data imputation, and deep learning methods. Specifically, we conducted a series of experiments with the following objectives: a) generating a realistic synthetic dataset, b) simulating data missingness, c) recovering the missing data, and d) analyzing imputation performance. Our methodology used a Gaussian mixture model, whose parameters were learned from a cleaned subset of a real demographic and health dataset, to generate the synthetic data. We simulated missingness at rates of 10%, 20%, 30%, and 40% under the missing completely at random (MCAR) scheme. We used an integrated performance analysis framework involving clustering, classification, and direct imputation analysis. Our results show that models trained on synthetic and imputed datasets could make predictions with accuracies of 83% on an unseen real dataset and 80% on an unseen reserved synthetic test dataset. Moreover, the models that used the denoising autoencoder (DAE) method for imputation yielded the lowest log loss, an indication of good performance, even though their accuracy measures were slightly lower. In conclusion, our work demonstrates that, using our methodology, one can reverse engineer a solution to resolve missingness in an unseen dataset with missing values. Moreover, though we used a health dataset, our methodology can be applied in other contexts.
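To make the pipeline concrete, here is a minimal sketch in Python, assuming numpy and scikit-learn; the data shapes, the number of mixture components, and the use of IterativeImputer as a stand-in for the imputation step (the study itself evaluates methods including a DAE) are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# 1) Fit a Gaussian mixture on a cleaned subset of the real data
#    (random placeholder here in place of the real demographic/health subset).
real_clean = rng.normal(size=(1000, 8))
gmm = GaussianMixture(n_components=5, random_state=0).fit(real_clean)

# 2) Generate a realistic synthetic dataset from the learned parameters.
synthetic, _ = gmm.sample(5000)

# 3) Simulate MCAR missingness at a chosen rate (10%-40% in the study):
#    every cell is dropped independently with the same probability.
def make_mcar(X, rate, rng):
    mask = rng.random(X.shape) < rate
    X_missing = X.copy()
    X_missing[mask] = np.nan
    return X_missing, mask

synthetic_missing, mask = make_mcar(synthetic, rate=0.30, rng=rng)

# 4) Recover the missing values (IterativeImputer as a simple proxy for the
#    imputation step; the paper also evaluates a denoising autoencoder).
imputed = IterativeImputer(random_state=0).fit_transform(synthetic_missing)

# 5) Direct imputation analysis: RMSE on the cells that were masked out.
rmse = np.sqrt(np.mean((imputed[mask] - synthetic[mask]) ** 2))
print(f"imputation RMSE on missing cells: {rmse:.3f}")
```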
In recent years, short videos have witnessed rapid growth on e-commerce platforms such as Taobao. To keep the content fresh, the platform needs to release a large number of new videos every day, which leaves traditional click-through rate (CTR) prediction methods facing a severe item cold-start problem. In this paper, we propose GIFT, an efficient graph-guided feature transfer system, to fully exploit the rich information of warmed-up videos to compensate for cold-start videos. Specifically, we build a heterogeneous graph containing physical and semantic linkages to guide the feature transfer process from warmed-up videos to cold-start videos. Physical linkages represent explicit relations, while semantic linkages measure the proximity of the multi-modal representations of two videos. We elaborately design the feature transfer function so that different types of transferred features (e.g., ID representations and historical statistics) from different metapaths on the graph are handled accordingly. We conduct extensive experiments on a large real-world dataset, and the results show that our GIFT system significantly outperforms SOTA methods and brings a 6.82% lift in CTR on the homepage of the Taobao App.
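A minimal sketch of the semantic-linkage and feature-transfer idea, assuming PyTorch; the function names, top-k neighbor selection, and simple softmax-weighted aggregation are illustrative assumptions and do not reproduce the exact GIFT transfer functions or metapath handling.

```python
import torch
import torch.nn.functional as F

def semantic_links(cold_emb, warm_emb, top_k=10):
    """Link each cold-start video to its top-k warm neighbors by the cosine
    similarity of their multi-modal representations (the semantic linkage)."""
    sim = F.normalize(cold_emb, dim=1) @ F.normalize(warm_emb, dim=1).T
    scores, idx = sim.topk(top_k, dim=1)      # (num_cold, top_k)
    return scores, idx

def transfer_features(cold_emb, warm_emb, warm_stats, top_k=10):
    """Transfer features of warm neighbors (e.g. historical statistics) to
    cold-start items, weighted by the strength of the semantic links."""
    scores, idx = semantic_links(cold_emb, warm_emb, top_k)
    weights = torch.softmax(scores, dim=1)    # attention over neighbors
    neighbor_stats = warm_stats[idx]          # (num_cold, top_k, stat_dim)
    return (weights.unsqueeze(-1) * neighbor_stats).sum(dim=1)

# Toy usage with random tensors standing in for learned representations.
cold_emb = torch.randn(4, 64)      # multi-modal embeddings of new videos
warm_emb = torch.randn(100, 64)    # embeddings of warmed-up videos
warm_stats = torch.randn(100, 16)  # historical statistics of warm videos
transferred = transfer_features(cold_emb, warm_emb, warm_stats)
print(transferred.shape)           # torch.Size([4, 16])
```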
We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As the DNN maps from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of the flow between layers, which is calculated as the inner product between features from two layers. When we compare the student DNN against a network of the same size trained without a teacher, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model; (2) the student DNN outperforms the original DNN; and (3) the student DNN can learn the distilled knowledge from a teacher DNN trained on a different task, and the student DNN outperforms the original DNN trained from scratch.
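A minimal sketch of this idea, assuming PyTorch: the flow between two layers is computed as the inner-product (Gram) matrix of their feature maps, and the student is penalized for deviating from the teacher's flow matrices. The tensor shapes, the requirement that teacher and student channel counts match at the chosen layer pairs, and the plain mean-squared loss are illustrative assumptions.

```python
import torch

def flow_matrix(feat_a, feat_b):
    """Inner product between two feature maps of the same spatial size.
    feat_a: (B, C1, H, W), feat_b: (B, C2, H, W) -> (B, C1, C2)."""
    b, c1, h, w = feat_a.shape
    c2 = feat_b.shape[1]
    a = feat_a.reshape(b, c1, h * w)
    bb = feat_b.reshape(b, c2, h * w)
    return torch.bmm(a, bb.transpose(1, 2)) / (h * w)

def flow_distillation_loss(teacher_pairs, student_pairs):
    """Mean squared difference between teacher and student flow matrices,
    summed over the corresponding layer pairs."""
    loss = 0.0
    for (t1, t2), (s1, s2) in zip(teacher_pairs, student_pairs):
        loss = loss + ((flow_matrix(t1, t2) - flow_matrix(s1, s2)) ** 2).mean()
    return loss

# Toy usage: one layer pair; channel counts match between teacher and student.
t1, t2 = torch.randn(8, 64, 16, 16), torch.randn(8, 128, 16, 16)
s1, s2 = torch.randn(8, 64, 16, 16), torch.randn(8, 128, 16, 16)
print(flow_distillation_loss([(t1, t2)], [(s1, s2)]))
```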